In calculus, Taylor's theorem gives an approximation of a ''k''-times differentiable function around a given point by a ''k''-th order Taylor polynomial. For analytic functions the Taylor polynomials at a given point are finite-order truncations of its Taylor series, which completely determines the function in some neighborhood of the point. The exact content of "Taylor's theorem" is not universally agreed upon. Indeed, there are several versions of it applicable in different situations, and some of them contain explicit estimates on the error of approximating the function by its Taylor polynomial.

Taylor's theorem is named after the mathematician Brook Taylor, who stated a version of it in 1712, although an explicit expression for the error was not provided until much later by Joseph-Louis Lagrange. An earlier version of the result was already mentioned in 1671 by James Gregory.

Taylor's theorem is taught in introductory-level calculus courses and is one of the central elementary tools in mathematical analysis. Within pure mathematics it is the starting point of more advanced asymptotic analysis, and it is commonly used in more applied fields such as numerics, as well as in mathematical physics. Taylor's theorem also generalizes to multivariate and vector-valued functions between spaces of any dimensions ''n'' and ''m''. This generalization of Taylor's theorem is the basis for the definition of so-called jets, which appear in differential geometry and partial differential equations.

== Motivation ==

If a real-valued function ''f'' is differentiable at the point ''a'', then it has a linear approximation at the point ''a''. This means that there exists a function ''h''<sub>1</sub> such that

:<math>f(x) = f(a) + f'(a)(x - a) + h_1(x)(x - a), \qquad \lim_{x \to a} h_1(x) = 0.</math>

Here

:<math>P_1(x) = f(a) + f'(a)(x - a)</math>

is the linear approximation of ''f'' at the point ''a''. The graph of <math>y = P_1(x)</math> is the tangent line to the graph of ''f'' at <math>x = a</math>. The error in the approximation is

:<math>R_1(x) = f(x) - P_1(x) = h_1(x)(x - a).</math>

Given the limiting behavior of ''h''<sub>1</sub>, this error goes to zero a little bit faster than <math>x - a</math> as ''x'' tends to ''a''.
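The vanishing of ''h''<sub>1</sub> can be checked numerically. The following is a minimal sketch (not from the article), assuming for illustration the function <math>f(x) = e^x</math> expanded at <math>a = 0</math>, so that <math>P_1(x) = 1 + x</math>; the names <code>f</code> and <code>p1</code> are hypothetical.

```python
import math

# Linear (first-order Taylor) approximation of f(x) = exp(x) at a = 0:
# P1(x) = f(0) + f'(0) * (x - 0) = 1 + x
def f(x):
    return math.exp(x)

def p1(x):
    return 1.0 + x

# The error R1(x) = f(x) - P1(x) vanishes faster than (x - a), i.e.
# h1(x) = R1(x) / (x - a) -> 0 as x -> a.
for x in [0.1, 0.01, 0.001]:
    h1 = (f(x) - p1(x)) / x
    print(f"x = {x:>6}: h1(x) = {h1:.7f}")
```

Printing the quotient at shrinking values of ''x'' shows it decreasing roughly in proportion to ''x'' itself, which is the behavior the definition of ''h''<sub>1</sub> demands.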
If we wanted a better approximation to ''f'', we might try a quadratic polynomial instead of a linear function. Instead of matching just one derivative of ''f'' at ''a'', we can match two derivatives, producing a polynomial that has the same slope and concavity as ''f'' at ''a''. The quadratic polynomial in question is

:<math>P_2(x) = f(a) + f'(a)(x - a) + \frac{f''(a)}{2}(x - a)^2.</math>

Taylor's theorem ensures that the quadratic approximation is, in a sufficiently small neighborhood of the point ''a'', a better approximation than the linear approximation. Specifically,

:<math>f(x) = P_2(x) + h_2(x)(x - a)^2, \qquad \lim_{x \to a} h_2(x) = 0.</math>

Here the error in the approximation is

:<math>R_2(x) = f(x) - P_2(x) = h_2(x)(x - a)^2,</math>

which, given the limiting behavior of ''h''<sub>2</sub>, goes to zero faster than <math>(x - a)^2</math> as ''x'' tends to ''a''.

Similarly, we get still better approximations to ''f'' if we use polynomials of higher degree, since then we can match even more derivatives with ''f'' at the selected base point. In general, the error in approximating a function by a polynomial of degree ''k'' will go to zero a little bit faster than <math>(x - a)^k</math> as ''x'' tends to ''a''.

This result is of asymptotic nature: it only tells us that the error ''R''<sub>''k''</sub> in an approximation by a ''k''-th order Taylor polynomial ''P''<sub>''k''</sub> tends to zero faster than any nonzero ''k''-th degree polynomial as ''x'' → ''a''. It does not tell us how large the error is in any concrete neighborhood of the center of expansion, but for this purpose there are explicit formulae for the remainder term (given below) which are valid under some additional regularity assumptions on ''f''. These enhanced versions of Taylor's theorem typically lead to uniform estimates for the approximation error in a small neighborhood of the center of expansion, but the estimates do not necessarily hold for neighborhoods which are too large, even if the function ''f'' is analytic. In that situation one may have to select several Taylor polynomials with different centers of expansion to have reliable Taylor approximations of the original function (see animation on the right).
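The improvement from linear to quadratic approximation can be seen concretely. This is a sketch under the same illustrative assumptions as before (<math>f(x) = e^x</math>, <math>a = 0</math>, chosen for this example and not taken from the article), comparing the errors of <math>P_1</math> and <math>P_2</math>:

```python
import math

# First- and second-order Taylor polynomials of f(x) = exp(x) at a = 0:
# P1(x) = 1 + x,  P2(x) = 1 + x + x^2 / 2
def f(x):
    return math.exp(x)

def p1(x):
    return 1.0 + x

def p2(x):
    return 1.0 + x + x * x / 2.0

# Near a = 0 the quadratic error f - P2 shrinks like |x|^3, while the
# linear error f - P1 shrinks only like x^2; h2(x) = (f - P2) / x^2 -> 0.
for x in [0.1, 0.01]:
    print(f"x = {x}: |f - P1| = {abs(f(x) - p1(x)):.2e}, "
          f"|f - P2| = {abs(f(x) - p2(x)):.2e}")
```

At each sample point the quadratic polynomial is markedly closer to ''f'', and the gap widens as ''x'' approaches the base point, matching the asymptotic statement above.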
There are several things we might do with the remainder term:
# Estimate the error in using a polynomial ''P''<sub>''k''</sub>(''x'') of degree ''k'' to estimate ''f''(''x'') on a given interval (''a'' − ''r'', ''a'' + ''r''). (The interval and the degree ''k'' are fixed; we want to find the error.)
# Find the smallest degree ''k'' for which the polynomial ''P''<sub>''k''</sub>(''x'') approximates ''f''(''x'') to within a given error (or tolerance) on a given interval (''a'' − ''r'', ''a'' + ''r''). (The interval and the error are fixed; we want to find the degree.)
# Find the largest interval (''a'' − ''r'', ''a'' + ''r'') on which ''P''<sub>''k''</sub>(''x'') approximates ''f''(''x'') to within a given error (or tolerance). (The degree and the error are fixed; we want to find the interval.)

It is also possible that increasing the degree of the approximating polynomial does not increase the quality of approximation at all, even if the function ''f'' to be approximated is infinitely many times differentiable. An example of this behavior is given below, and it is related to the fact that, unlike analytic functions, more general functions are not (locally) determined by the values of their derivatives at a single point.
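The second task in the list above can be carried out mechanically once a bound on the remainder is available. The following sketch assumes, purely for illustration, the Lagrange-form bound <math>|R_k(x)| \le M \, r^{k+1} / (k+1)!</math> with <math>f(x) = e^x</math> expanded at <math>a = 0</math> on (−''r'', ''r''), where <math>M = e^r</math> bounds the (''k''+1)-th derivative there; the function name <code>smallest_degree</code> and the chosen tolerance are hypothetical, not from the article:

```python
import math

def smallest_degree(r, tol):
    """Smallest degree k whose Taylor polynomial for exp at a = 0 is
    guaranteed (by the Lagrange remainder bound) to be within tol on (-r, r)."""
    m = math.exp(r)  # bound on |f^(k+1)| on (-r, r), since every derivative of exp is exp
    k = 0
    while m * r ** (k + 1) / math.factorial(k + 1) > tol:
        k += 1
    return k

# For the assumed example (r = 1, tolerance 1e-6) this prints 9.
print(smallest_degree(1.0, 1e-6))
```

Tightening the tolerance raises the required degree only slowly, because the factorial in the denominator of the bound grows much faster than the power of ''r'' in the numerator.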